
    A comparison of Poisson and uniform sampling for active measurements

    Copyright © 2006 IEEE.
    Active probes of network performance represent samples of the underlying performance of a system. Some effort has gone into considering appropriate sampling patterns for such probes: in particular, there has been significant discussion of the importance of sampling using a Poisson process to avoid biases introduced by synchronization of system and measurements. However, there are unanswered questions about whether Poisson probing has costs in terms of sampling efficiency, and there is some misinformation about what types of inferences are possible with different probe patterns. This paper provides a quantitative comparison of the two sampling methods. It also shows that the irregularity in probing patterns is useful not just in avoiding synchronization, but also in determining frequency-domain properties of a system. The paper provides a firm basis for practitioners and researchers to decide which type of sampling to use in a particular application, along with methods for the analysis of their outputs.
    Matthew Roughan
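    The two probe schedules compared in this abstract can be sketched as follows (a minimal illustration, not the paper's measurement code; function names are my own). Uniform probing sends at a fixed period, while Poisson probing draws i.i.d. exponential gaps, which is what avoids synchronization with periodic system behaviour:

    ```python
    import random

    def uniform_probe_times(rate, duration):
        """Periodic probe times at fixed interval 1/rate within [0, duration)."""
        interval = 1.0 / rate
        t, times = 0.0, []
        while t < duration:
            times.append(t)
            t += interval
        return times

    def poisson_probe_times(rate, duration, seed=None):
        """Probe times from a Poisson process: i.i.d. exponential inter-probe
        gaps with mean 1/rate, so probes never lock onto a system period."""
        rng = random.Random(seed)
        t, times = 0.0, []
        while True:
            t += rng.expovariate(rate)
            if t >= duration:
                return times
            times.append(t)
    ```

    Both schedules have the same mean rate; they differ only in the regularity of the gaps.
    
    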

    On the correlation of internet packet losses

    Copyright © 2008 IEEE.
    In this paper we analyze more than 100 hours of packet traces from PlanetLab measurements to study the correlation of Internet packet losses. We first apply statistical tests to identify the correlation timescale of the binary loss data. We find that in half of the traces packet losses are far from independent. More significantly, the correlation timescale of packet losses is correlated with the network load. We then examine the loss runs and the success runs of packets. The loss runs are typically short, regardless of the network load. We find that the success runs in the majority of our traces are also uncorrelated, and their correlation timescale does not depend on the network load. All of these results show that the impact of network load on the correlation of packet losses is nontrivial, and that loss runs and success runs are better modeled as independent than the binary losses themselves.
    Hung X. Nguyen and Matthew Roughan
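    The run-length and correlation statistics described above can be sketched on a binary loss sequence (1 = lost, 0 = delivered). This is an illustrative reimplementation, not the paper's analysis code:

    ```python
    def run_lengths(seq, value):
        """Lengths of maximal runs of `value` in a binary sequence,
        e.g. loss runs (value=1) or success runs (value=0)."""
        runs, count = [], 0
        for x in seq:
            if x == value:
                count += 1
            elif count:
                runs.append(count)
                count = 0
        if count:
            runs.append(count)
        return runs

    def lag1_autocorr(seq):
        """Lag-1 autocorrelation; near zero suggests the sequence is
        consistent with independence at this timescale."""
        n = len(seq)
        mean = sum(seq) / n
        var = sum((x - mean) ** 2 for x in seq) / n
        if var == 0:
            return 0.0
        cov = sum((seq[i] - mean) * (seq[i + 1] - mean)
                  for i in range(n - 1)) / n
        return cov / var
    ```

    The paper's tests operate over a range of lags to find the correlation timescale; lag 1 is the simplest case.
    
    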

    Internet scalability: properties and evolution

    Copyright © 2008 IEEE.
    Matthew Roughan; Steve Uhlig; Walter Willinger

    Topology reconstruction and characterisation of wireless ad hoc networks

    © Copyright 2007 IEEE.
    Wireless ad hoc networks provide a useful communications infrastructure for the mobile battlefield. In this paper we apply and develop passive radio-frequency signal-strength monitoring and packet transmission time profiling techniques to characterise and reconstruct an encrypted wireless network's topology. We show that by using signal strength measurements from three or more wireless probes, and by assuming the use of carrier sense multiple access with collision avoidance for physical-layer control, we can produce a representation of a wireless network's logical topology and in some cases reconstruct the physical topology. Smoothed Kalman filtering is used to track the reconstructed topology over time and, in conjunction with a weighted least squares template fitting technique, enables the profiling of the individual network nodes and the characterisation of their transmissions. © 2007 Crown Copyright.
    http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?tp=&arnumber=4289257&isnumber=428867
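    The smoothed tracking step can be illustrated with a toy one-dimensional Kalman filter followed by a Rauch-Tung-Striebel backward pass (a simplified stand-in for the paper's smoothed Kalman filtering; the random-walk state model and the noise variances q and r here are assumed values, not taken from the paper):

    ```python
    def kalman_smooth(obs, q=1e-3, r=0.5):
        """1-D random-walk Kalman filter with RTS smoothing.
        q: process-noise variance, r: measurement-noise variance."""
        n = len(obs)
        xf, pf = [0.0] * n, [0.0] * n   # filtered means / variances
        xp, pp = [0.0] * n, [0.0] * n   # one-step predictions
        x, p = obs[0], r
        for k, z in enumerate(obs):
            # predict: random walk keeps the mean, variance grows by q
            xp[k], pp[k] = x, p + q
            # update with the new observation z
            gain = pp[k] / (pp[k] + r)
            x = xp[k] + gain * (z - xp[k])
            p = (1 - gain) * pp[k]
            xf[k], pf[k] = x, p
        # Rauch-Tung-Striebel backward smoothing pass
        xs = xf[:]
        for k in range(n - 2, -1, -1):
            c = pf[k] / pp[k + 1]
            xs[k] = xf[k] + c * (xs[k + 1] - xp[k + 1])
        return xs
    ```

    Smoothing uses future as well as past observations, which is why it tracks a slowly varying link metric with less jitter than filtering alone.
    
    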

    Node localisation in wireless ad hoc networks

    Wireless ad hoc networks often require a method for estimating their nodes' locations. Typically this is achieved using pair-wise measurements between nodes and their neighbours, where a number of nodes already know their locations accurately and the remaining nodes must calculate theirs from these known locations. Typically, a minimum mean square estimate (MMSE) or a maximum likelihood estimate (MLE) is used to generate the unknown node locations, making use of range estimates derived from measurements between the nodes. In this paper we investigate the efficacy of using radio-frequency received signal strength (RSS) measurements for the accurate location of transmitting nodes over long ranges. We show, with signal strength measurements from three or more wireless probes in noisy propagation conditions, that a weighted MMSE approach yields significant improvements in the variance of the location estimate over both the standard MMSE and MLE approaches.
    Jon Arnold, Nigel Bean, Miro Kraetzl and Matthew Roughan
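    A weighted least-squares position estimate of the kind described can be sketched as gradient descent on the weighted range residuals (a hypothetical minimal version, not the authors' estimator; the weights would in practice come from the noise level of each RSS-derived range):

    ```python
    import math

    def weighted_lsq_locate(anchors, ranges, weights, iters=500, lr=0.01):
        """Estimate (x, y) minimizing sum_i w_i * (||p - a_i|| - r_i)^2
        by gradient descent, starting from the anchor centroid."""
        x = sum(a[0] for a in anchors) / len(anchors)
        y = sum(a[1] for a in anchors) / len(anchors)
        for _ in range(iters):
            gx = gy = 0.0
            for (ax, ay), r, w in zip(anchors, ranges, weights):
                d = math.hypot(x - ax, y - ay) or 1e-12  # avoid divide-by-zero
                g = 2 * w * (d - r) / d
                gx += g * (x - ax)
                gy += g * (y - ay)
            x -= lr * gx
            y -= lr * gy
        return x, y
    ```

    With noisy ranges, down-weighting the least reliable anchors is what reduces the variance of the estimate relative to the unweighted version.
    
    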

    Estimating point-to-point and point-to-multipoint traffic matrices: An information-theoretic approach

    © 2005 IEEE.
    Traffic matrices are required inputs for many IP network management tasks, such as capacity planning, traffic engineering, and network reliability analysis. However, it is difficult to measure these matrices directly in large operational IP networks, so there has been recent interest in inferring traffic matrices from link measurements and other more easily measured data. Typically, this inference problem is ill-posed, as it involves significantly more unknowns than data. Experience in many scientific and engineering fields has shown that it is essential to approach such ill-posed problems via "regularization". This paper presents a new approach to traffic matrix estimation using a regularization based on "entropy penalization". Our solution chooses the traffic matrix consistent with the measured data that is information-theoretically closest to a model in which source/destination pairs are stochastically independent. It applies to both point-to-point and point-to-multipoint traffic matrix estimation. We use fast algorithms based on modern convex optimization theory to solve for our traffic matrices. We evaluate our algorithm with real backbone traffic and routing data, and demonstrate that it is fast, accurate, robust, and flexible.
    Yin Zhang, Matthew Roughan, Carsten Lund and David L. Donoho
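    The independence model and the entropy-closest adjustment can be illustrated in miniature. The paper fits general link measurements via convex optimization; the sketch below (my own simplification) uses only row and column totals, for which the minimum relative-entropy solution reduces to the classical iterative proportional fitting procedure started from a gravity (independence) model:

    ```python
    def gravity_model(row_tot, col_tot):
        """Independence model: entry (i, j) proportional to the product
        of the i-th source total and j-th destination total."""
        total = sum(row_tot)
        return [[r * c / total for c in col_tot] for r in row_tot]

    def ipf(matrix, row_tot, col_tot, iters=50):
        """Iterative proportional fitting: alternately rescale rows and
        columns to match the target marginals. The fixed point is the
        matrix closest in KL divergence to the starting matrix."""
        m = [row[:] for row in matrix]
        for _ in range(iters):
            for i, row in enumerate(m):
                s = sum(row)
                if s:
                    m[i] = [x * row_tot[i] / s for x in row]
            for j in range(len(m[0])):
                s = sum(row[j] for row in m)
                if s:
                    for row in m:
                        row[j] *= col_tot[j] / s
        return m
    ```

    In the paper the constraints are full link-load equations rather than marginals, but the regularization principle (stay as close as possible to independence) is the same.
    
    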

    Self-similar traffic and network dynamics

    Copyright © 2002 IEEE.
    One of the most significant findings of traffic measurement studies over the last decade has been the observed self-similarity in packet network traffic. Subsequent research has focused on the origins of this self-similarity, and the network engineering significance of this phenomenon. This paper reviews what is currently known about network traffic self-similarity and its significance. We then consider a matter of current research, namely, the manner in which network dynamics (specifically, the dynamics of the transmission control protocol (TCP), the predominant transport protocol used in today's Internet) can affect the observed self-similarity. To this end, we first discuss some of the pitfalls associated with applying traditional performance evaluation techniques to highly-interacting, large-scale networks such as the Internet. We then present one promising approach based on chaotic maps to capture and model the dynamics of TCP-type feedback control in such networks. Not only can appropriately chosen chaotic map models capture a range of realistic source characteristics, but by coupling these to network state equations, one can study the effects of network dynamics on the observed scaling behavior. We consider several aspects of TCP feedback, and illustrate by examples that while TCP-type feedback can modify the self-similar scaling behavior of network traffic, it neither generates it nor eliminates it.
    Ashok Erramilli, Matthew Roughan, Darryl Veitch and Walter Willinger
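    Self-similar scaling in traffic is commonly summarized by the Hurst parameter H. The aggregated-variance method is one standard diagnostic (a generic sketch, not the paper's chaotic-map model): for a self-similar series, the variance of the m-aggregated series decays as m^(2H-2), so a log-log slope b gives H = 1 + b/2.

    ```python
    import math

    def hurst_variance_time(series, scales):
        """Aggregated-variance estimate of the Hurst parameter H."""
        points = []
        for m in scales:
            # non-overlapping block means at aggregation level m
            agg = [sum(series[i:i + m]) / m
                   for i in range(0, len(series) - m + 1, m)]
            mu = sum(agg) / len(agg)
            var = sum((x - mu) ** 2 for x in agg) / len(agg)
            points.append((math.log(m), math.log(var)))
        # ordinary least-squares slope of log(var) against log(m)
        n = len(points)
        sx = sum(x for x, _ in points)
        sy = sum(y for _, y in points)
        sxx = sum(x * x for x, _ in points)
        sxy = sum(x * y for x, y in points)
        slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        return 1 + slope / 2
    ```

    Independent traffic gives H near 0.5; measured packet traffic typically shows H well above 0.5, which is the self-similarity the paper discusses.
    
    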

    Where’s Waldo? practical searches for stability in iBGP

    Copyright © 2008 IEEE.
    What does a child's search of a large, complex cartoon for the eponymous character (Waldo) have to do with Internet routing? Network operators also search complex datasets, but Waldo is the least of their worries: routing oscillation is a much greater concern. Networks can be designed to avoid routing oscillation, but the approaches proposed so far unnecessarily reduce configuration flexibility. More importantly, apparently minor changes to a configuration can lead to instability. Verification of network stability is therefore an important task, but unlike the child's search, this problem is NP-hard. Until now, no practical method was available for large networks. In this paper, we present an efficient algorithm for proving the stability of iBGP, or finding the potential oscillatory modes, and demonstrate its efficacy by applying it to the iBGP configuration of a large Tier-2 AS.
    Ashley Flavel, Matthew Roughan, Nigel Bean and Aman Shaikh

    Assessing the Use of Area- and Time-Averaging Based on Known De-correlation Scales to Provide Satellite Derived Sea Surface Temperatures in Coastal Areas

    Satellite derived sea surface temperatures (SSTs) are often used as a proxy for in situ water temperatures, as they are readily available over large spatial and temporal scales. However, contamination of satellite images can prohibit their use in coastal areas. We compared in situ temperatures (~10 m depth) to foundation SSTs at 31 sites inshore of the East Australian Current (EAC), the dynamic western boundary current of the south Pacific gyre, using an area-averaging approach to overcome coastal contamination. Varying across- and along-shelf distances were used to area-average SST measurements, and de-correlation time scales were used to gap-fill data. As the EAC is typically anisotropic (dominant along-shore flow), the choice of across-shelf distances influenced the correlation with in situ temperatures more than the along-shelf distances. However, the "optimal" distances for both measurements were within known de-correlation length scales. Incorporating both SST area and time averaging (based on de-correlation time scales) produced data for an average of 96% of the days that in situ loggers were deployed, compared to 27% without area averaging and 52% with area averaging alone. Temperature differences between the in situ data and SSTs varied depending on the time of year, with higher differences in the austral summer, when daily in situ temperatures can range by up to 4.20°C. The differences between the in situ and SST measurements were, however, significant with or without area averaging (t-test: p-values < 0.05). Nevertheless, when using the area-averaging approaches, SSTs were on average only ~1.05°C different from in situ temperatures, less than the in situ temperature fluctuations. Linear mixed models revealed that latitude and distance to the coast and nearest estuary did not influence the difference between the in situ and satellite data as much as the water depth.
    This study shows that using de-correlation length and time scales to inform how satellite data are processed can overcome contamination and missing data, thereby greatly increasing the coverage and utility of SST data, particularly in coastal areas.
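    The area- and time-averaging steps can be sketched on a toy SST grid, with None marking contaminated pixels (an illustrative simplification of the approach described; the window sizes stand in for the de-correlation length and time scales, and all names are my own):

    ```python
    def area_average(grid, ci, cj, di, dj):
        """Mean of valid (non-None) cells in a (2*di+1) x (2*dj+1) window
        around cell (ci, cj); None if every cell is contaminated."""
        vals = []
        for i in range(max(0, ci - di), min(len(grid), ci + di + 1)):
            for j in range(max(0, cj - dj), min(len(grid[0]), cj + dj + 1)):
                if grid[i][j] is not None:
                    vals.append(grid[i][j])
        return sum(vals) / len(vals) if vals else None

    def time_fill(series, window):
        """Fill missing daily values with the mean of valid values within
        +/- window days (standing in for the de-correlation time scale)."""
        out = []
        for t, v in enumerate(series):
            if v is not None:
                out.append(v)
                continue
            near = [series[k]
                    for k in range(max(0, t - window),
                                   min(len(series), t + window + 1))
                    if series[k] is not None]
            out.append(sum(near) / len(near) if near else None)
        return out
    ```

    Using asymmetric di and dj windows mirrors the paper's finding that across-shelf and along-shelf averaging distances matter differently in an anisotropic current.
    
    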